
    Probabilistic resource allocation system with self-adaptive capability

    A probabilistic resource allocation system is disclosed comprising a low-capacity computational module (Short-Term Memory, or STM) and a self-organizing associative network (Long-Term Memory, or LTM) in which nodes represent elementary resources, terminal nodes represent goals, and directed links represent the order of resource association in different allocation episodes. Goals and their priorities are indicated by the user; allocation decisions are made in the STM, while candidate associations of resources are supplied by the LTM based on association strength (reliability). Reliability values are assigned automatically to the network links based on the frequency and relative success of exercising those links in previous allocation decisions. Accumulation of allocation history in the form of an associative network in the LTM reduces the computational demands of subsequent allocations. To this end, the network automatically partitions itself into strongly associated, high-reliability packets, allowing fast approximate computation and display of allocation solutions that satisfy the overall reliability and other user-imposed constraints. System performance improves over time through modification of network parameters and partitioning criteria based on performance feedback.
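    Since the abstract specifies the bookkeeping concretely (link reliability derived from frequency and relative success of past episodes, partitioning into high-reliability packets), a minimal sketch may help fix ideas. Everything below is an illustrative assumption, not the patented method: the class name, the success-frequency reliability rule, and the threshold-based connected-components partitioning are all invented for the example.

```python
# Minimal sketch of the LTM associative network described above.
# The reliability rule (success count / usage count) and the
# threshold-based partitioning are assumptions for illustration,
# not the patent's actual method.
from collections import defaultdict

class AssociativeNetwork:
    """Directed links between resources, weighted by reliability."""

    def __init__(self):
        # (src, dst) -> [times exercised, times successful]
        self.stats = defaultdict(lambda: [0, 0])

    def record_episode(self, path, success):
        """Update link statistics after one allocation episode."""
        for src, dst in zip(path, path[1:]):
            self.stats[(src, dst)][0] += 1
            if success:
                self.stats[(src, dst)][1] += 1

    def reliability(self, src, dst):
        """Relative success frequency of a link (0 if never used)."""
        used, ok = self.stats[(src, dst)]
        return ok / used if used else 0.0

    def packets(self, threshold=0.8):
        """Group nodes into strongly associated, high-reliability
        packets: connected components of links whose reliability
        meets the threshold (union-find)."""
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for (src, dst) in self.stats:
            if self.reliability(src, dst) >= threshold:
                parent[find(src)] = find(dst)  # union

        groups = defaultdict(set)
        for node in list(parent):
            groups[find(node)].add(node)
        return list(groups.values())

# Usage: record episodes, then read off packets.
net = AssociativeNetwork()
net.record_episode(["cpu", "ram", "goal_A"], success=True)
net.record_episode(["cpu", "ram", "goal_A"], success=True)
print(net.packets(threshold=0.8))
```

    Grouping nodes into packets this way lets later allocations search over a handful of packets rather than the full network, matching the abstract's claim that accumulated history reduces computational demands.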

    Brain Functional Architecture and Human Understanding

    The opening line of Aristotle’s Metaphysics asserts that “humans desire to understand”, establishing understanding as the defining characteristic of the human mind and the human species. What is understanding, what role does it play in cognition, what advantages does it confer, and what brain mechanisms are involved? Webster’s Dictionary defines understanding as “apprehending general relations in a multitude of particulars.” The proposal discussed in this chapter defines understanding as a form of active inference in self-adaptive systems seeking to expand their inference domains while minimizing the metabolic costs incurred in those expansions. Under the same proposal, understanding is viewed as an advanced adaptive mechanism involving self-directed construction of mental models that establish relations between domain entities. Understanding complements learning and serves to overcome the inertia of learned behavior when conditions are unfamiliar or deviate from those experienced in the past. While learning is common to all animals, understanding is unique to the human species. This chapter unpacks these notions, focusing on different facets of understanding, and formulates hypotheses regarding the underlying neuronal mechanisms, attempting to assess their plausibility and to reconcile them with recent ideas and findings concerning brain functional architecture.

    Thermodynamic Computing

    The hardware and software foundations laid in the first half of the 20th century enabled the computing technologies that have transformed the world, but these foundations are now under siege. The current computing paradigm, on which much of our present standard of living rests, faces fundamental limitations that are evident from several perspectives. In terms of hardware, devices have become so small that we are struggling to eliminate the effects of thermodynamic fluctuations, which are unavoidable at the nanometer scale. In terms of software, our ability to imagine and program effective computational abstractions and implementations is clearly challenged in complex domains. In terms of systems, roughly five percent of the power generated in the US is used to run computing systems; this astonishing figure is neither ecologically sustainable nor economically scalable. Economically, the cost of building next-generation semiconductor fabrication plants has soared past $10 billion. All of these difficulties (device scaling, software complexity, adaptability, energy consumption, and fabrication economics) indicate that the current computing paradigm has matured and that continued improvements along this path will be limited. If technological progress is to continue and the corresponding social and economic benefits are to continue to accrue, computing must become much more capable, energy efficient, and affordable. We propose that progress in computing can continue under a united, physically grounded computational paradigm centered on thermodynamics. Herein we propose a research agenda to extend these thermodynamic foundations into complex, non-equilibrium, self-organizing systems and to apply them holistically to future computing systems that will harness nature's innate computational capacity. We call this type of computing “Thermodynamic Computing”, or TC.
    Comment: A Computing Community Consortium (CCC) workshop report, 36 pages.

    The Understanding Capacity and Information Dynamics in the Human Brain

    This article proposes a theory of the neuronal processes underlying cognition, focusing on the mechanisms of understanding in the human brain. Understanding is a product of mental modeling. The paper argues that mental modeling is a form of information production inside the neuronal system, extending the reach of human cognition “beyond the information given” (Bruner, J.S., Beyond the Information Given, 1973). Mental modeling enables forms of learning and prediction (learning with understanding and prediction via explanation) that are unique to humans, allowing robust performance under unfamiliar conditions that have no precedent in past experience. The proposed theory centers on the notions of self-organization and the emergent properties of collective behavior in the neuronal substrate. The theory motivates new approaches to the design of intelligent artifacts (machine understanding) that are complementary to those underlying the technology of machine learning.

    Life and Understanding: The Origins of “Understanding” in Self-Organizing Nervous Systems

    This article is motivated by the formulation of biotic self-organisation in Friston (2013), where the emergence of ‘life’ in coupled material entities (e.g., macromolecules) was predicated on bounded subsets that maintain a degree of statistical independence from the rest of the network. Boundary elements in such systems or units constitute a Markov blanket, separating the internal states of the unit from its surrounding states. In this paper, we ask whether Markov blankets operate in the nervous system and underlie the development of intelligence, enabling a progression from the ability to sense the environment to the ability to understand it. Markov blankets have previously been hypothesized to form in neuronal networks as a result of phase transitions that cause network subsets to fold into bounded assemblies, or packets (Yufik, 1998a). The ensuing neuronal packets hypothesis builds on the notion of neuronal assemblies (Hebb, 1949, 1980), treating such assemblies as flexible but stable biophysical structures capable of withstanding entropic erosion; in other words, structures that maintain their integrity under changing conditions. In this treatment, neuronal packets give rise to the perception of ‘objects’, i.e., quasi-stable (stimulus-bound) groupings that are conserved over multiple presentations (e.g., the experience of perceiving ‘apple’ can be interrupted and resumed many times). Monitoring the variations in such groups enables the apprehension of behaviour, i.e., attributing to objects the ability to undergo changes without loss of self-identity. Ultimately, ‘understanding’ involves self-directed composition and manipulation of the ensuing ‘mental models’ constituted by neuronal packets, whose dynamics capture relationships among objects: that is, dependencies in the behaviour of objects under varying conditions. For example, movement is known to involve rotation of population vectors in the motor cortex (Georgopoulos et al., 1988; Georgopoulos et al., 1993). The neuronal packet hypothesis associates ‘understanding’ with the ability to detect and generate coordinated rotation of population vectors, in neuronal packets, in the associative cortex and other regions of the brain. The ability to coordinate vector representations in this way is assumed to have developed in conjunction with the ability to postpone overt motor expression of implicit movement, thus creating a mechanism for prediction.
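    The final sentences invoke coordinated rotation of population vectors. As a point of reference (not code from the paper), a population vector in the Georgopoulos sense is the firing-rate-weighted sum of neurons' preferred directions; the sketch below, with made-up tuning parameters, computes two such vectors and the rotation between them.

```python
import numpy as np

# Toy illustration of a Georgopoulos-style population vector:
# each neuron has a preferred direction, and the population vector
# is the firing-rate-weighted sum of those directions. Tuning data
# and rates are made up for illustration.
rng = np.random.default_rng(0)
n_neurons = 100
angles = rng.uniform(0, 2 * np.pi, n_neurons)  # preferred directions
preferred = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def population_vector(rates):
    """Firing-rate-weighted sum of preferred directions."""
    return rates @ preferred

def cosine_tuning(direction, baseline=10.0, gain=8.0):
    """Idealized cosine tuning: the rate peaks when the movement
    direction matches a neuron's preferred direction."""
    return baseline + gain * (preferred @ direction)

# Population vectors for two movement directions 30 degrees apart.
d1 = np.array([1.0, 0.0])
theta = np.deg2rad(30)
d2 = np.array([np.cos(theta), np.sin(theta)])

v1 = population_vector(cosine_tuning(d1))
v2 = population_vector(cosine_tuning(d2))

# The angle between the two population vectors approximately
# recovers the 30-degree rotation of the encoded direction.
rotation = np.degrees(np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0]))
print(f"population vector rotated by {rotation:.1f} degrees")
```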

    Editorial: Self-Organization in the Nervous System

    “Self-organization is the spontaneous—often seemingly purposeful—formation of spatial, temporal, spatiotemporal structures, or functions in systems composed of few or many components. In physics, chemistry and biology self-organization occurs in open systems driven away from thermal equilibrium” (Haken, Scholarpedia). The contributions in this special issue aim to elucidate the role of self-organization in shaping cognitive processes in the course of development and throughout evolution, or “from paramecia to Einstein” (Torday and Miller). The central question is: which self-organizing mechanisms in the human nervous system are common to all forms of life, and which mechanisms (if any) are unique to the human species?

    Self-Organization in the Nervous System

    This special issue reviews state-of-the-art approaches to the biophysical roots of cognition. These approaches appeal to the notion that cognitive capacities serve to optimize responses to changing external conditions. Crucially, this optimisation rests on the ability to predict changes in the environment, allowing organisms to respond pre-emptively to changes before their onset. The biophysical mechanisms that underwrite these cognitive capacities remain largely unknown, although a number of hypotheses have been advanced in systems neuroscience, biophysics and other disciplines. These hypotheses converge on the intersection of thermodynamic and information-theoretic formulations of self-organization in the brain. The information-theoretic perspective emerged when Shannon’s theory of message transmission in communication systems was used to characterise message passing between neurons; in its subsequent incarnations, this approach has been integrated into computational neuroscience and the Bayesian brain framework. The thermodynamic formulation rests on a view of the brain as an aggregation of stochastic microprocessors (neurons), with subsequent appeal to the constructs of statistical mechanics and thermodynamics, in particular the use of ensemble dynamics to elucidate the relationship between micro-scale parameters and those of the macro-scale aggregation (the brain). In general, the thermodynamic approach treats the brain as a dissipative system and seeks to represent the development and functioning of cognitive mechanisms as collective capacities that emerge in the course of self-organization. Its explicanda include energy efficiency, which enables progressively more complex cognitive operations such as long-term prediction and anticipatory planning. A cardinal example of the Bayesian brain approach is the free energy principle, which explains self-organizing dynamics in the brain in terms of its predictive capabilities and its selective sampling of sensory inputs that optimise variational free energy as a proxy for Bayesian model evidence. An example of the thermodynamically grounded proposals in this issue associates self-organization with phase transitions in neuronal state-spaces, resulting in the formation of bounded neuronal assemblies (neuronal packets). This special issue seeks a discourse between the thermodynamic and informational formulations of the self-organising and self-evidencing brain. For example, could minimization of thermodynamic free energy during the formation of neuronal packets underlie minimization of variational free energy?
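    The closing question turns on variational free energy, which can be made concrete with a toy calculation. Below is a minimal numerical sketch, not drawn from any paper in the issue, of variational free energy F = KL(q(s) || p(s)) − E_q[log p(o | s)] for a two-state generative model; minimizing F over the approximate posterior q recovers the Bayesian posterior, and the minimum equals the surprise −log p(o). All numbers are illustrative assumptions.

```python
import numpy as np

# Toy variational free energy for a discrete generative model with
# two hidden states s and one binary observation o. All numbers are
# made up for illustration.
prior = np.array([0.5, 0.5])        # p(s)
likelihood = np.array([0.9, 0.2])   # p(o=1 | s) for each state

def free_energy(q):
    """F = E_q[log q(s) - log p(o, s)]
         = KL(q || p(s)) - E_q[log p(o | s)].
    Minimizing F over q drives q toward the posterior p(s | o)."""
    joint = likelihood * prior      # p(o=1, s)
    return np.sum(q * (np.log(q) - np.log(joint)))

# The exact posterior minimizes F, and min F = -log p(o) (surprise).
posterior = likelihood * prior / np.sum(likelihood * prior)
for q0 in (0.5, 0.7, posterior[0]):
    q = np.array([q0, 1 - q0])
    print(f"q = {q}, F = {free_energy(q):.4f}")
print(f"-log p(o) = {-np.log(np.sum(likelihood * prior)):.4f}")
```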